
    Challenges in computational lower bounds

    We draw two incomplete, biased maps of challenges in computational complexity lower bounds.

    Fourier Conjectures, Correlation Bounds, and Majority


    New Sampling Lower Bounds via the Separator

    Suppose that a target distribution can be approximately sampled by a low-depth decision tree, or more generally by an efficient cell-probe algorithm. It is shown to be possible to restrict the input to the sampler so that its output distribution is still not too far from the target distribution, and at the same time many output coordinates are almost pairwise independent. This new tool is then used to obtain several new sampling lower bounds and separations, including a separation between AC0 and low-depth decision trees, and a hierarchy theorem for sampling. It is also used to obtain a new proof of the Patrascu-Viola data-structure lower bound for Rank, thereby unifying sampling and data-structure lower bounds.

    Is It Real, or Is It Randomized?: A Financial Turing Test

    We construct a financial "Turing test" to determine whether human subjects can differentiate between actual vs. randomized financial returns. The experiment consists of an online video-game (http://arora.ccs.neu.edu) where players are challenged to distinguish actual financial market returns from random temporal permutations of those returns. We find overwhelming statistical evidence (p-values no greater than 0.5%) that subjects can consistently distinguish between the two types of time series, thereby refuting the widespread belief that financial markets "look random." A key feature of the experiment is that subjects are given immediate feedback regarding the validity of their choices, allowing them to learn and adapt. We suggest that such novel interfaces can harness human capabilities to process and extract information from financial data in ways that computers cannot.
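
    A minimal sketch, in Python, of the randomization described above: actual returns are replaced by a random temporal permutation, which preserves the marginal distribution of returns (and the end-to-end compounded return) while destroying temporal structure. The synthetic price series and parameters below are illustrative assumptions, not data from the experiment.

    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in for an actual daily price series (assumption: geometric random walk).
    prices = 100 * np.cumprod(1 + rng.normal(0.0003, 0.01, size=500))

    returns = np.diff(prices) / prices[:-1]   # simple daily returns
    permuted = rng.permutation(returns)       # random temporal permutation of the same returns

    # Rebuild a price path from each return series; the permuted path ends at the
    # same level (same multiset of returns) but has different temporal structure.
    actual_path = prices[0] * np.cumprod(1 + returns)
    permuted_path = prices[0] * np.cumprod(1 + permuted)
    print(actual_path[-1], permuted_path[-1])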

    A Computational View of Market Efficiency

    We propose to study market efficiency from a computational viewpoint. Borrowing from theoretical computer science, we define a market to be \emph{efficient with respect to resources $S$} (e.g., time, memory) if no strategy using resources $S$ can make a profit. As a first step, we consider memory-$m$ strategies whose action at time $t$ depends only on the $m$ previous observations at times $t-m,\ldots,t-1$. We introduce and study a simple model of market evolution, where strategies impact the market by their decision to buy or sell. We show that the effect of optimal strategies using memory $m$ can lead to "market conditions" that were not present initially, such as (1) market bubbles and (2) the possibility for a strategy using memory $m' > m$ to make a bigger profit than was initially possible. We suggest ours as a framework to rationalize the technological arms race of quantitative trading firms.
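
    A toy sketch, in Python, of a memory-$m$ strategy interacting with a market it impacts. The trend-following rule and the additive price-impact update below are illustrative assumptions, not the model analyzed in the paper.

    import random

    random.seed(0)
    m = 2  # the strategy's memory: it sees only the last m price moves

    def memory_m_strategy(moves):
        """Action at time t depends only on the m previous observations."""
        window = moves[-m:]
        if all(x > 0 for x in window):
            return +1   # buy after m up-moves
        if all(x < 0 for x in window):
            return -1   # sell after m down-moves
        return 0

    price, moves = 100.0, []
    for t in range(50):
        action = memory_m_strategy(moves) if len(moves) >= m else 0
        noise = random.choice([-1, +1])   # exogenous order flow
        move = action + noise             # the strategy's order impacts the price
        price += move
        moves.append(move)
    print(price)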

    On Hardness Assumptions Needed for "Extreme High-End" PRGs and Fast Derandomization

    The hardness vs. randomness paradigm aims to explicitly construct pseudorandom generators $G: \{0,1\}^r \to \{0,1\}^m$ that fool circuits of size $m$, assuming the existence of explicit hard functions. A "high-end PRG" with seed length $r = O(\log m)$ (implying BPP = P) was achieved in a seminal work of Impagliazzo and Wigderson (STOC 1997), assuming the high-end hardness assumption: there exist constants $0 < \beta < 1 < B$ and functions computable in time $2^{B \cdot n}$ that cannot be computed by circuits of size $2^{\beta \cdot n}$. Recently, motivated by fast derandomization of randomized algorithms, Doron et al. (FOCS 2020) and Chen and Tell (STOC 2021) construct "extreme high-end PRGs" with seed length $r = (1+o(1)) \cdot \log m$, under qualitatively stronger assumptions. We study whether extreme high-end PRGs can be constructed from the corresponding hardness assumption in which $\beta = 1-o(1)$ and $B = 1+o(1)$, which we call the extreme high-end hardness assumption. We give a partial negative answer:
    - The construction of Doron et al. composes a PEG (pseudo-entropy generator) with an extractor. The PEG is constructed starting from a function that is hard for MA-type circuits. We show that black-box PEG constructions from the extreme high-end hardness assumption must have large seed length (and so cannot be used to obtain extreme high-end PRGs by applying an extractor). To prove this, we establish a new property of (general) black-box PRG constructions from hard functions: it is possible to fix many output bits of the construction while fixing few bits of the hard function. This property distinguishes PRG constructions from typical extractor constructions, and this may explain why it is difficult to design PRG constructions.
    - The construction of Chen and Tell composes two PRGs: $G_1: \{0,1\}^{(1+o(1)) \cdot \log m} \to \{0,1\}^{r_2 = m^{\Omega(1)}}$ and $G_2: \{0,1\}^{r_2} \to \{0,1\}^m$. The first PRG is constructed from the extreme high-end hardness assumption, and the second PRG needs to run in time $m^{1+o(1)}$ and is constructed assuming one-way functions. We show that in black-box proofs of hardness amplification to $1/2 + 1/m$, reductions must make $\Omega(m)$ queries, even in the extreme high-end. Known PRG constructions from hard functions are black-box and use (or imply) hardness amplification, and so cannot be used to construct a PRG $G_1$ from the extreme high-end hardness assumption.
    The new feature of our hardness amplification result is that it applies even to the extreme high-end setting of parameters, whereas past work does not. Our techniques also improve recent lower bounds of Ron-Zewi, Shaltiel and Varma (ITCS 2021) on the number of queries of local list-decoding algorithms.
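
    For reference, a compact LaTeX restatement of the parameter regimes named in the abstract (no claims beyond those stated above):

    % PRG: G : {0,1}^r -> {0,1}^m fooling circuits of size m.
    \[
    \begin{aligned}
    \text{high-end PRG: } & r = O(\log m) \\
    \text{extreme high-end PRG: } & r = (1+o(1))\cdot\log m \\
    \text{high-end hardness: } & \exists\, 0 < \beta < 1 < B:\ \text{some } f \text{ computable in time } 2^{B\cdot n} \text{ requires circuits of size } 2^{\beta\cdot n} \\
    \text{extreme high-end hardness: } & \beta = 1 - o(1),\quad B = 1 + o(1)
    \end{aligned}
    \]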

    Interleaved Group Products

    Let $G$ be the special linear group $\mathrm{SL}(2,q)$. We show that if $(a_1,\ldots,a_t)$ and $(b_1,\ldots,b_t)$ are sampled uniformly from large subsets $A$ and $B$ of $G^t$ then their interleaved product $a_1 b_1 a_2 b_2 \cdots a_t b_t$ is nearly uniform over $G$. This extends a result of the first author, which corresponds to the independent case where $A$ and $B$ are product sets. We obtain a number of other results. For example, we show that if $X$ is a probability distribution on $G^m$ such that any two coordinates are uniform in $G^2$, then a pointwise product of $s$ independent copies of $X$ is nearly uniform in $G^m$, where $s$ depends on $m$ only. Extensions to other groups are also discussed. We obtain closely related results in communication complexity, which is the setting where some of these questions were first asked by Miles and Viola. For example, suppose party $A_i$ of $k$ parties $A_1,\dots,A_k$ receives on its forehead a $t$-tuple $(a_{i1},\dots,a_{it})$ of elements from $G$. The parties are promised that the interleaved product $a_{11}\dots a_{k1} a_{12}\dots a_{k2}\dots a_{1t}\dots a_{kt}$ is equal either to the identity $e$ or to some other fixed element $g \in G$, and their goal is to determine which of the two the product is equal to. We show that for all fixed $k$ and all sufficiently large $t$ the communication is $\Omega(t \log |G|)$, which is tight. Even for $k=2$ the previous best lower bound was $\Omega(t)$. As an application, we establish the security of the leakage-resilient circuits studied by Miles and Viola in the "only computation leaks" model.
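
    A minimal numerical sketch, in Python, of the interleaved product $a_1 b_1 a_2 b_2 \cdots a_t b_t$ over $\mathrm{SL}(2,q)$. The parameters ($q = 5$, $t = 3$) and the choice of $A$ and $B$ as fixed random samples of $G^t$ are illustrative assumptions; the theorem concerns arbitrary large subsets.

    import itertools
    import random
    from collections import Counter

    random.seed(0)
    q, t = 5, 3

    def matmul(x, y):
        """Multiply two 2x2 matrices over F_q, stored as 4-tuples (a, b, c, d)."""
        a, b, c, d = x
        e, f, g, h = y
        return ((a*e + b*g) % q, (a*f + b*h) % q,
                (c*e + d*g) % q, (c*f + d*h) % q)

    # Enumerate SL(2,q): 2x2 matrices over F_q with determinant 1.
    G = [M for M in itertools.product(range(q), repeat=4)
         if (M[0]*M[3] - M[1]*M[2]) % q == 1]

    # Illustrative stand-ins for the subsets A, B of G^t (sampled with repetition).
    A = [tuple(random.choices(G, k=t)) for _ in range(2000)]
    B = [tuple(random.choices(G, k=t)) for _ in range(2000)]

    counts = Counter()
    samples = 20000
    for _ in range(samples):
        a, b = random.choice(A), random.choice(B)
        prod = (1, 0, 0, 1)  # identity matrix
        for ai, bi in zip(a, b):
            prod = matmul(matmul(prod, ai), bi)   # interleave: ... a_i b_i ...
        counts[prod] += 1

    dist = 0.5 * sum(abs(counts[g]/samples - 1/len(G)) for g in G)
    print(f"|G| = {len(G)}, estimated distance from uniform: {dist:.3f}")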

    Short PCPs with projection queries

    We construct a PCP for NTIME($2^n$) with constant soundness, $2^n \cdot \mathrm{poly}(n)$ proof length, and $\mathrm{poly}(n)$ queries where the verifier's computation is simple: the queries are a projection of the input randomness, and the computation on the prover's answers is a 3CNF. The previous upper bound for these two computations was polynomial-size circuits. Composing this verifier with a proof oracle increases the circuit depth of the latter by 2. Our PCP is a simple variant of the PCP by Ben-Sasson, Goldreich, Harsha, Sudan, and Vadhan (CCC 2005). We also give a more modular exposition of the latter, separating the combinatorial from the algebraic arguments. If our PCP is taken as a black box, we obtain a more direct proof of the result by Williams, later with Santhanam (CCC 2013), that derandomizing circuits on $n$ bits from a class $C$ in time $2^n/n^{\omega(1)}$ yields that NEXP is not in a related circuit class $C'$. Our proof yields a tighter connection: $C$ is an And-Or of circuits from $C'$. Along the way we show that the same lower bound follows if the satisfiability of the And of any 3 circuits from $C'$ can be solved in time $2^n/n^{\omega(1)}$.

    Mixing in Non-Quasirandom Groups

    We initiate a systematic study of mixing in non-quasirandom groups. Let A and B be two independent, high-entropy distributions over a group G. We show that the product distribution AB is statistically close to the distribution F(AB) for several choices of G and F, including: 1) G is the affine group of 2x2 matrices, and F sets the top-right matrix entry to a uniform value, 2) G is the lamplighter group, that is, the wreath product of ℤ_2 and ℤ_n, and F is multiplication by a certain subgroup, 3) G is H^n where H is non-abelian, and F selects a uniform coordinate and takes a uniform conjugate of it. The obtained bounds for (1) and (2) are tight. This work is motivated by and applied to problems in communication complexity. We consider the 3-party communication problem of deciding if the product of three group elements multiplies to the identity. We prove lower bounds for the groups above, which are tight for the affine and the lamplighter groups.
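
    A minimal numerical sketch, in Python, of case (1) above: G is the affine group over a prime field (matrices of the form [[a, b], [0, 1]] with a nonzero), and F re-randomizes the top-right entry. The prime p = 11, the sample sizes, and the choice of A and B as uniform distributions over random quarter-size subsets of G are illustrative assumptions.

    import random
    from collections import Counter

    random.seed(0)
    p = 11

    def mul(x, y):
        """(a1, b1) * (a2, b2) represents [[a1, b1], [0, 1]] @ [[a2, b2], [0, 1]]."""
        return ((x[0] * y[0]) % p, (x[0] * y[1] + x[1]) % p)

    G = [(a, b) for a in range(1, p) for b in range(p)]   # |G| = p(p-1)
    A = random.sample(G, len(G) // 4)   # high-entropy: uniform over a quarter of G
    B = random.sample(G, len(G) // 4)

    n = 100000
    ab, f_ab = Counter(), Counter()
    for _ in range(n):
        g = mul(random.choice(A), random.choice(B))
        ab[g] += 1
        f_ab[(g[0], random.randrange(p))] += 1   # F: set top-right entry to uniform

    support = set(ab) | set(f_ab)
    dist = 0.5 * sum(abs(ab[g]/n - f_ab[g]/n) for g in support)
    print(f"estimated statistical distance between AB and F(AB): {dist:.3f}")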